National Repository of Grey Literature: 2 records found
Risk-sensitive and Mean Variance Optimality in Continuous-time Markov Decision Chains
Sladký, Karel
In this note we consider continuous-time Markov decision processes with finite state and action spaces in which the stream of rewards generated by the Markov process is evaluated by an exponential utility function with a given risk-sensitivity coefficient (so-called risk-sensitive models). If the risk-sensitivity coefficient equals zero (the risk-neutral case), we arrive at a standard Markov decision process; necessary and sufficient mean-reward optimality conditions are then easily obtained, and variability can be evaluated by the variance of the total expected reward. For the risk-sensitive case, i.e. a non-zero risk-sensitivity coefficient, we establish, for a given value of that coefficient, necessary and sufficient optimality conditions for the maximal (or minimal) growth rate of the expectation of the exponential utility function, along with the mean value of the corresponding certainty equivalent. Recall that in this case not only the total reward but also its higher moments are taken into account.
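As a small illustration (not taken from the record above), the certainty equivalent of a random total reward R under exponential utility with risk-sensitivity coefficient γ is CE = (1/γ) ln E[exp(γR)], which reduces to the plain expectation E[R] as γ → 0. A minimal Monte Carlo sketch with hypothetical Gaussian reward samples:

```python
import math
import random

def certainty_equivalent(rewards, gamma):
    """Certainty equivalent (1/gamma) * ln E[exp(gamma * R)] of sampled
    total rewards; reduces to the plain sample mean as gamma -> 0."""
    if gamma == 0.0:
        return sum(rewards) / len(rewards)  # risk-neutral case
    mgf = sum(math.exp(gamma * r) for r in rewards) / len(rewards)
    return math.log(mgf) / gamma

# Hypothetical reward samples: Gaussian with mean 10 and std. dev. 2.
# For a Gaussian, CE = mu + gamma * sigma^2 / 2, so gamma = -0.5 gives CE = 9:
# the risk-averse certainty equivalent lies below the risk-neutral mean,
# reflecting that higher moments of the reward are taken into account.
random.seed(0)
samples = [random.gauss(10.0, 2.0) for _ in range(100_000)]
ce_neutral = certainty_equivalent(samples, 0.0)
ce_averse = certainty_equivalent(samples, -0.5)
print(ce_neutral, ce_averse)
```

The γ = -0.5 evaluation penalizes reward variability, so the printed risk-averse value comes out below the risk-neutral mean.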
The Variance of Discounted Rewards in Markov Decision Processes: Laurent Expansion and Sensitive Optimality
Sladký, Karel
In this paper we consider discounted Markov decision processes with finite state space and compact action spaces. We present formulas for the variance of the total expected discounted reward, along with its partial Laurent expansion, which makes it possible to compare the results with analogous results for undiscounted models.
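To illustrate the kind of quantity this record concerns (a sketch, not the paper's formulas): for a hypothetical i.i.d. reward stream with mean μ and variance σ², the discounted total Σ βᵗ rₜ has mean μ/(1-β) and variance σ²/(1-β²), which a short simulation can confirm:

```python
import random

def discounted_total(rewards, beta):
    """Total discounted reward sum_t beta^t * r_t for one sample path."""
    total, disc = 0.0, 1.0
    for r in rewards:
        total += disc * r
        disc *= beta
    return total

# Hypothetical i.i.d. Gaussian rewards: mean mu, std. dev. sigma.
# Closed forms: mean mu/(1-beta), variance sigma^2/(1-beta^2).
random.seed(1)
beta, mu, sigma = 0.9, 1.0, 0.5
n, horizon = 20_000, 200  # beta**200 is negligible, so truncation is safe
totals = [
    discounted_total([random.gauss(mu, sigma) for _ in range(horizon)], beta)
    for _ in range(n)
]
mean = sum(totals) / n
var = sum((x - mean) ** 2 for x in totals) / n
print(mean, var)  # near mu/(1-beta) = 10.0 and sigma^2/(1-beta^2) ~ 1.316
```

Markov-dependent rewards, as in the paper, couple the moments across states, which is where the Laurent expansion in the discount factor becomes useful.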
